
    Optimal scaling for the transient phase of the random walk Metropolis algorithm: The mean-field limit

    We consider the random walk Metropolis algorithm on \mathbb{R}^n with Gaussian proposals, when the target probability measure is the n-fold product of a one-dimensional law. In the limit n \to \infty, it is well known (see [Ann. Appl. Probab. 7 (1997) 110-120]) that, when the variance of the proposal scales inversely proportionally to the dimension n while time is accelerated by the factor n, a diffusive limit is obtained for each component of the Markov chain if the chain starts at equilibrium. This paper extends this result to the case where the initial distribution is not the target probability measure. Noting that the interaction between the components of the chain, due to the common acceptance/rejection of the proposed moves, is of mean-field type, we obtain a propagation of chaos result under the same scaling as in the stationary case. This proves that, in terms of the dimension n, the same scaling holds for the transient phase of the Metropolis-Hastings algorithm as near stationarity. The diffusive and mean-field limit of each component is a diffusion process that is nonlinear in the sense of McKean. This opens the route to new investigations of the optimal choice of the proposal variance in order to accelerate convergence to equilibrium (see [Optimal scaling for the transient phase of Metropolis-Hastings algorithms: The longtime behavior, Bernoulli (2014), to appear]). Published at http://dx.doi.org/10.1214/14-AAP1048 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
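
    To make the scaling concrete, here is a minimal sketch (not taken from the paper) of a random walk Metropolis sampler on a product target whose Gaussian proposal variance is \ell^2/n per coordinate; the function name, the default \ell, and the standard Gaussian example target are illustrative assumptions.

```python
import numpy as np

def rwm_product_target(log_f, n, n_steps, ell=2.38, x0=None, rng=None):
    """Random walk Metropolis on R^n for a product target prod_i f(x_i).

    The Gaussian proposal has variance ell^2 / n per coordinate, the
    scaling under which the diffusive limit discussed above is obtained.
    ell = 2.38 is the classical stationary-phase choice for i.i.d.
    product targets; in the transient phase other values may be better.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    log_pi = log_f(x).sum()          # log target = sum of 1-d log densities
    sigma = ell / np.sqrt(n)         # proposal std dev scales like n^{-1/2}
    accepted = 0
    for _ in range(n_steps):
        y = x + sigma * rng.standard_normal(n)
        log_pi_y = log_f(y).sum()
        # the accept/reject step is common to all coordinates,
        # which is the mean-field coupling discussed above
        if np.log(rng.uniform()) < log_pi_y - log_pi:
            x, log_pi = y, log_pi_y
            accepted += 1
    return x, accepted / n_steps

# Example: standard Gaussian product target (an illustrative choice)
log_f = lambda x: -0.5 * x**2
x_final, acc_rate = rwm_product_target(log_f, n=100, n_steps=10_000)
```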

    Nonasymptotic bounds on the estimation error of MCMC algorithms

    We address the problem of upper bounding the mean square error of MCMC estimators. Our analysis is nonasymptotic. We first establish a general result valid for essentially all ergodic Markov chains encountered in Bayesian computation and a possibly unbounded target function f. The bound is sharp in the sense that the leading term is exactly \sigma_{\mathrm{as}}^2(P,f)/n, where \sigma_{\mathrm{as}}^2(P,f) is the CLT asymptotic variance. Next, we proceed to specific additional assumptions and give explicit computable bounds for geometrically and polynomially ergodic Markov chains under quantitative drift conditions. As a corollary, we provide results on confidence estimation. Published at http://dx.doi.org/10.3150/12-BEJ442 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
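
    The role of the leading term \sigma_{\mathrm{as}}^2(P,f)/n can be illustrated numerically. The sketch below estimates the CLT asymptotic variance from a single trajectory with the standard batch-means device; the batch-means estimator, the function name, and the AR(1) toy trajectory are illustrative assumptions and are not the techniques of the paper, which derives theoretical bounds.

```python
import numpy as np

def batch_means_asymptotic_variance(fx, n_batches=30):
    """Estimate the CLT asymptotic variance sigma_as^2(P, f) from a single
    trajectory f(X_1), ..., f(X_n) via non-overlapping batch means."""
    n = len(fx)
    b = n // n_batches                                   # batch length
    means = fx[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    return b * means.var(ddof=1)

# Toy trajectory: a stationary AR(1) chain standing in for f(X_k)
rng = np.random.default_rng(0)
rho, n = 0.9, 100_000
fx = np.empty(n)
fx[0] = rng.standard_normal()
for k in range(1, n):
    fx[k] = rho * fx[k - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

sigma2_as = batch_means_asymptotic_variance(fx)
print("estimated leading MSE term sigma_as^2 / n:", sigma2_as / n)
```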

    Analysis of Langevin Monte Carlo via convex optimization

    In this paper, we provide new insights on the Unadjusted Langevin Algorithm. We show that this method can be formulated as a first-order optimization algorithm for an objective functional defined on the Wasserstein space of order 2. Using this interpretation and techniques borrowed from convex optimization, we give a non-asymptotic analysis of this method for sampling from a log-concave smooth target distribution on \mathbb{R}^d. Based on this interpretation, we propose two new methods for sampling from a non-smooth target distribution, which we analyze as well. Moreover, these new algorithms are natural extensions of the Stochastic Gradient Langevin Dynamics (SGLD) algorithm, which is a popular extension of the Unadjusted Langevin Algorithm. Like SGLD, they rely only on approximations of the gradient of the target log density and can be used for large-scale Bayesian inference.
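
    A minimal sketch of the Unadjusted Langevin Algorithm, i.e. the iteration x_{k+1} = x_k - \gamma \nabla U(x_k) + \sqrt{2\gamma}\,\xi_{k+1} targeting \pi \propto e^{-U}; the function name, step size, and Gaussian example target are illustrative assumptions, and an SGLD-style variant would replace \nabla U by an unbiased minibatch estimate.

```python
import numpy as np

def ula(grad_U, x0, step, n_steps, rng=None):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * N(0, I),
    targeting pi(x) proportional to exp(-U(x)), with no accept/reject step."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Example: approximate sampling from a standard Gaussian on R^d, U(x) = |x|^2 / 2
d = 10
x = ula(grad_U=lambda x: x, x0=np.zeros(d), step=0.05, n_steps=5_000)
```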

    Nonasymptotic bounds on the mean square error for MCMC estimates via renewal techniques

    Nummelin's split chain construction allows one to decompose a Markov chain Monte Carlo (MCMC) trajectory into i.i.d. "excursions". Regenerative MCMC algorithms based on this technique use a random number of samples. They have been proposed as a promising alternative to the usual fixed-length simulation [25, 33, 14]. In this note we derive nonasymptotic bounds on the mean square error (MSE) of regenerative MCMC estimates via techniques of renewal theory and sequential statistics. These results are applied to construct confidence intervals. We then focus on two cases of particular interest: chains satisfying the Doeblin condition and chains satisfying a geometric drift condition. Available explicit nonasymptotic results are compared for different schemes of MCMC simulation.
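
    For intuition, the sketch below simulates the split chain for a finite-state kernel satisfying the Doeblin condition P(x, \cdot) \ge \delta\,\nu(\cdot) and forms the regenerative (ratio) estimate of \pi(f); the function name, the toy transition matrix, and the choice of f are illustrative assumptions, not taken from the note.

```python
import numpy as np

def regenerative_estimate(P, f, n_tours, rng=None):
    """Regenerative estimate of pi(f) for a finite-state chain whose kernel
    satisfies the Doeblin condition P(x, .) >= delta * nu(.) for all x.

    Split-chain construction: at each step, with probability delta the next
    state is drawn from nu (a regeneration), otherwise from the residual
    kernel (P(x, .) - delta * nu(.)) / (1 - delta).  The blocks between
    regenerations are i.i.d. excursions, and pi(f) is estimated by the
    ratio of tour sums to tour lengths.
    """
    rng = np.random.default_rng() if rng is None else rng
    row_min = P.min(axis=0)                        # componentwise minorization
    delta = row_min.sum()
    nu = row_min / delta
    residual = (P - delta * nu) / (1.0 - delta)    # residual kernel, row-stochastic

    tour_sums, tour_lens = [], []
    x = rng.choice(len(nu), p=nu)                  # start at a regeneration
    s, length = 0.0, 0
    while len(tour_sums) < n_tours:
        s += f(x)
        length += 1
        if rng.uniform() < delta:                  # regeneration: tour ends
            tour_sums.append(s)
            tour_lens.append(length)
            s, length = 0.0, 0
            x = rng.choice(len(nu), p=nu)
        else:
            x = rng.choice(len(nu), p=residual[x])
    return np.sum(tour_sums) / np.sum(tour_lens)

# Toy example: a 3-state chain and f = identity on the state labels
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
est = regenerative_estimate(P, f=lambda x: float(x), n_tours=2_000)
```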